The Supervised IBP: Neighbourhood Preserving Infinite Latent Feature Models
Authors
Abstract
WHAT: a probabilistic model that infers binary latent variables preserving the neighbourhood structure of the data. WHY: to perform nearest-neighbour search for the purpose of retrieval. WHEN: under the dynamic, streaming nature of Internet data. HOW: an Indian Buffet Process prior coupled with a preference relation. WHERE: dynamic extension of hash codes.
[Figure: motivating example of dynamic hash-code extension, showing a hash function over attributes such as Brown, Yellow, Spots, and Long neck, to which a new attribute is to be added.]
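As background for the HOW above, the following is a minimal Python sketch of the standard (unsupervised) Indian Buffet Process generative step, together with Hamming-distance retrieval over the resulting binary codes. It is not the supervised, neighbourhood-preserving model proposed in the paper; the concentration parameter alpha, the helper names, and the toy sizes are illustrative assumptions.

import numpy as np

def sample_ibp(num_customers, alpha, seed=None):
    """Draw a binary feature matrix Z from the standard Indian Buffet Process prior.

    Customer i takes each previously sampled dish k with probability m_k / i
    (m_k = number of earlier customers who took dish k), then tries
    Poisson(alpha / i) brand-new dishes.
    """
    rng = np.random.default_rng(seed)
    rows = []          # dishes taken by each customer
    dish_counts = []   # m_k for every dish seen so far
    for i in range(1, num_customers + 1):
        taken = set()
        for k, m_k in enumerate(dish_counts):     # revisit existing dishes
            if rng.random() < m_k / i:
                taken.add(k)
                dish_counts[k] += 1
        for _ in range(rng.poisson(alpha / i)):   # sample new dishes
            taken.add(len(dish_counts))
            dish_counts.append(1)
        rows.append(taken)
    Z = np.zeros((num_customers, len(dish_counts)), dtype=np.uint8)
    for i, taken in enumerate(rows):
        Z[i, list(taken)] = 1
    return Z

def hamming_neighbours(Z, query_row, top_k=5):
    """Rank items by Hamming distance between their binary codes (hash codes)."""
    dists = np.count_nonzero(Z != Z[query_row], axis=1)
    return [i for i in np.argsort(dists) if i != query_row][:top_k]

if __name__ == "__main__":
    Z = sample_ibp(num_customers=20, alpha=3.0, seed=0)
    print("binary feature matrix shape:", Z.shape)
    print("nearest neighbours of item 0:", hamming_neighbours(Z, 0))

The supervised model described in the abstract additionally couples the binary matrix with a preference relation, so that items that are neighbours in the original data space receive nearby codes and new bits can be appended as new data arrive.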
Similar resources
Fast Search for Infinite Latent Feature Models
We propose several search-based alternatives for inference in Indian Buffet Process (IBP) based models. We consider the case in which we only want a maximum a posteriori (MAP) estimate of the latent feature assignment matrix. If true posterior samples are required, these MAP estimates can also serve as intelligent initializers for MCMC-based algorithms. Another advantage of the proposed methods...
Correlated Non-Parametric Latent Feature Models
We are often interested in explaining data through a set of hidden factors or features. When the number of hidden features is unknown, the Indian Buffet Process (IBP) is a nonparametric latent feature model that does not bound the number of active features in a dataset. However, the IBP assumes that all latent features are uncorrelated, making it inadequate for many real-world problems. We introdu...
Nonparametric Bayesian discrete latent variable models for unsupervised learning
The analysis of real-world problems often requires robust and flexible models that can accurately represent the structure in the data. Nonparametric Bayesian priors allow the construction of such models which can be used for complex real-world data. Nonparametric models, despite their name, can be defined as models that have infinitely many parameters. This thesis is about two types of nonparam...
Maximum Entropy Discrimination Denoising Autoencoders
Deep generative models (DGMs) have brought about a major breakthrough, as well as renewed interest, in generative latent variable models. However, current DGM formulations do not address the data-driven inference of the number of latent features needed to represent the observed data. Traditional linear formulations allow this issue to be addressed by resorting to tools from the...
Learning Latent Features with Infinite Non-negative Binary Matrix Tri-factorization
Non-negative Matrix Factorization (NMF) has been widely exploited to learn latent features from data. However, previous NMF models often assume a fixed number of features, say p features, where p is simply chosen by experiment. Moreover, it is difficult to learn binary features, since binary matrix factorization involves more challenging optimization problems. In this paper, we propose a new Bayesian...
Journal: CoRR
Volume: abs/1309.6858
Pages: -
Publication date: 2013